It takes a few dollars and 8 minutes to create a deepfake. And that's only the start
At first glance, the video Ethan Mollick posted on LinkedIn last month looks and sounds like what you'd expect from a business professor at the University of Pennsylvania's Wharton School. Wearing a checked shirt, he's giving a talk about a topic he's deeply familiar with: entrepreneurship.
Sure, his delivery is stiff and his mouth moves a bit strangely. But if you didn't know him well, you probably wouldn't think twice.
But the video is not Ethan Mollick. It's a deepfake Mollick himself created, using artificial intelligence to generate his words, his voice and his moving image.
"It was mostly to see if I could, and then realizing that it's so much easier than I thought," Mollick said in an interview with NPR.
Like many who have been closely following the rapid acceleration in AI technology, Mollick is excited about the potential for these tools to change the way we work and help us be more creative.
But he's among a growing chorus of people worried that this proliferation of what's known as "generative AI" will supercharge propaganda and influence campaigns by bad actors.
Mollick teaches would-be entrepreneurs and executives about innovation. Lately he's gotten deeply into a new set of AI-powered tools that anyone can now use to create highly plausible images, text, audio and video — from chatbots like OpenAI's ChatGPT and Microsoft's Bing to image generators like DALL-E and Midjourney.
"I've stumbled into being a AI whisperer," Mollick said, laughing. He now requires his students to use AI and chronicles his own experiments on his social media feeds and newsletter.
Quick, easy and cheap
Mollick started with ChatGPT, the chatbot from OpenAI that exploded in popularity when it debuted in November and has kicked off a race among tech companies to launch generative AI.
"I said, 'Write a script that Ethan Mollick would say about entrepreneurship,' and it did a pretty good job," he said. Next, he turned to a tool that can clone a voice from a short audio clip.
"I gave it a minute of me talking about some unrelated topic like cheese and then pasted the speech in and it generated the sound file."
Finally, he fed that audio and a photo of himself into another AI app.
"You put in a script and it realistically moves the mouth around and moves the eyes around and makes you shrug. And that was all I needed," he said.
It was quick, easy and cheap. Mollick spent $11 and just eight minutes making it.
"By the end, I had me — a fake me — giving a fake lecture I've never given in my life, but sounds like me, in my fake voice," he said.
Mollick posted his experiment online as a demonstration, and a warning, that the risks from this kind of AI are not in the distant future — they're already here.
"I think people aren't worried enough about this," he said. "I'm somebody who's actually pretty pro this technology in a lot of ways. But I also think that we're not ready for the social implications of being able to spoof people at scale. ... The idea that you could do this for anyone is sort of a new phenomenon."
Fake video and images of Biden and Trump already exist
Concerns about deepfakes have been around for years. What's different now is that the technology has advanced and become accessible to anyone with a smartphone or computer.
People are having fun using them for jokes and memes, like a viral TikTok trend of videos using synthetic audio to spoof Presidents Donald Trump, Barack Obama and Joe Biden playing video games.
But deepfakes are already being used for political ends.
Jack Posobiec, a right-wing activist known for promoting the Pizzagate conspiracy theory, recently created a fake video of President Biden announcing a draft to send American soldiers to Ukraine.
While Posobiec explained that the video was a fake created by AI, he also described it as "a sneak preview, coming attractions, a glimpse into the world beyond."
Many people went on to share the video without any disclaimer that it wasn't real.
This week, AI-generated fake images depicting what it might look like if former President Trump were arrested were viewed by millions of Twitter users, amid speculation that a New York grand jury may soon indict the former president. A likely faked photo of Chinese leader Xi Jinping meeting with Russian President Vladimir Putin was also widely shared online.
AI-generated propaganda and scams are proliferating
The research firm Graphika identified the first known case of a state-aligned influence operation using deepfakes late last year. The researchers found pro-China bots sharing fake news videos, featuring AI-generated anchors, on Facebook and Twitter.
Meanwhile, scammers are using fake audio to steal money by posing as family members in crisis.
"The information ecosphere is going to get polluted," said Gary Marcus, a cognitive scientist at New York University who studies AI.
He says we're not prepared for what it means to live in a world full of AI-generated content, and he fears widespread access to this technology will further erode our ability to trust anything we see online.
"A bad actor can take one of these tools ... and use this to make unimaginable amounts of really plausible, almost terrifying misinformation that the average person is not going to recognize as misinformation," Marcus said.
"That may be complete with data, fake references to studies that haven't even existed before. And not just one story like this, which a human could write, but thousands or millions or billions, because you can automate these things."
Text from AIs will be harder to spot than pictures and video
Marcus and others watching the rapid release of AI to the public are particularly concerned about a new set of tools that create text — the technology that powers Bing, ChatGPT and Bard, the new chatbot Google released this week.
These tools are trained to identify patterns in language by ingesting vast swaths of text from the internet. They can generate news articles, essays, Twitter posts and conversations that sound like they were written by real people.
"Language models are a natural tool for propagandists," said Josh Goldstein, a research fellow at Georgetown University's Center for Security and Emerging Technology. He co-authored a recent paper examining how these AI-powered tools could be misused for influence operations.
"Using a language model, propagandists can create lots and lots of original text, and they can do it quickly and at little cost," he said.
That means a troll farm may need fewer workers, and wide-scale propaganda campaigns could come within reach of a broader range of bad actors.
What's more, researchers have found AI-created content can be really convincing.
"You can generate persuasive propaganda, even if you're not entirely fluent in English, or even if you don't know the idioms of your target community," Goldstein said.
Generated text can also be harder to detect than faked video or audio. Online campaigns that use AI to write posts may appear more organic than the copy-and-paste messages usually associated with bots.
And even if AI-written content is not always successful at persuasion, for propagandists that's a feature, not a bug. The fear is this profusion of generated text will amplify what's called the "firehose of falsehood," a propaganda strategy that indiscriminately sprays out false and often contradictory messages.
Former Trump adviser Steve Bannon had another phrase for this: "flooding the zone with s***."
"If you want to flood the zone with s***, there is no better tool than this," Marcus said.
To be clear, researchers have not yet identified a propaganda or influence operation built on AI-generated text.
Companies are scrambling to build safeguards
The tech companies launching AI tools are scrambling to put guardrails in place to prevent abuse, as well as to curb the technology's habit of simply making things up (known in the field as "hallucinating") and behaving bizarrely.
But there are open source versions these companies don't control. And at least one powerful AI language tool, made by Facebook parent Meta, has already leaked online, where it was quickly posted to the anonymous message board 4chan.
Meanwhile, tech companies are rushing to incorporate AI into more and more products, from search to productivity tools to operating systems.
Aza Raskin, co-founder of the Center for Humane Technology, described it as "an arms race to arm every other arms race."
Raskin and his co-founder, Tristan Harris, are known for the documentary The Social Dilemma, where they raised alarms about the societal harms of social media. They've now turned their focus to warning about the next iteration of those harms enabled by irresponsibly released AI.
Raskin said he sees big potential benefits from AI and acknowledges we will all have to learn to live with and use these tools.
"But that's very different than having these technologies baked into fundamental infrastructure," like consumer software and social apps, "before we know that they are safe," he said.
Mollick, the Wharton professor who deepfaked himself, worries none of this will curb the speed of Silicon Valley's AI frenzy.
"The cat has come out of the bag," he said, "and we're all dealing with cats everywhere."